Symbol Grounding in Multimodal Sequences using Recurrent Neural Networks
Authors
Abstract
The problem of how infants learn to associate visual inputs, speech, and an internal symbolic representation has long been of interest in Psychology, Neuroscience, and Artificial Intelligence. A priori, both visual and auditory inputs are complex analog signals carrying a large amount of noise and context, and lacking any segmentation information. In this paper, we address a simple form of this problem: the association of one visual input and one auditory input with each other. We show that the presented model learns segmentation, recognition, and symbolic representation under two simple assumptions: (1) that a symbolic representation exists, and (2) that the two different inputs represent the same symbolic structure. Our approach uses two Long Short-Term Memory (LSTM) networks for multimodal sequence learning and recovers the internal symbolic space with an EM-style algorithm. We compared our model against a standard LSTM on three multimodal datasets: digit, letter, and word recognition. Our model reached results similar to those of the LSTM baseline.
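The abstract describes the approach only at a high level; the following is a minimal illustrative sketch, assuming PyTorch, of how two per-modality LSTM classifiers could be coupled through an EM-style symbol-assignment step. The module and function names (ModalityLSTM, em_style_step) and the agreement-based E-step are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ModalityLSTM(nn.Module):
    """Maps one modality's feature sequence to symbol scores (hypothetical module)."""
    def __init__(self, input_dim, hidden_dim, n_symbols):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_symbols)

    def forward(self, x):                        # x: (batch, time, input_dim)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])                # classify from the last hidden state

def em_style_step(net_a, net_b, xa, xb, optimizer):
    """One EM-style iteration: (E) infer the symbol the two networks agree on most,
    (M) train both networks towards that shared assignment."""
    with torch.no_grad():                        # E-step: infer shared pseudo-labels
        agreement = net_a(xa).softmax(-1) + net_b(xb).softmax(-1)
        symbols = agreement.argmax(-1)           # (batch,) inferred symbols
    # M-step: supervised update of both networks on the inferred symbols
    loss = (nn.functional.cross_entropy(net_a(xa), symbols)
            + nn.functional.cross_entropy(net_b(xb), symbols))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random tensors (shapes are illustrative only):
vision_net = ModalityLSTM(input_dim=64, hidden_dim=128, n_symbols=10)
audio_net  = ModalityLSTM(input_dim=13, hidden_dim=128, n_symbols=10)
opt = torch.optim.Adam(list(vision_net.parameters()) + list(audio_net.parameters()))
loss = em_style_step(vision_net, audio_net,
                     torch.randn(8, 20, 64), torch.randn(8, 30, 13), opt)
```

Under these assumptions, each iteration infers a shared symbol for a paired visual/auditory example from the two networks' agreement and then updates both networks towards that assignment, which is one simple way to realize an EM-style recovery of a shared symbolic space.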
Similar references
Discretization of Series of Communication Signals in Noisy Environment by Reinforcement Learning
Considering the "Symbol Grounding Problem" and the brain structure of living things, the author believes that the best way to generate communication in robot-like systems is to use a neural network trained by reinforcement learning. As a first step in research on symbol emergence using neural networks, it was examined whether parallel analog communication signals a...
Language Acquisition and Symbol Grounding Transfer with Neural Networks and Cognitive Robots [IJCNN1323]
Neural networks have been proposed as an ideal cognitive modeling methodology to deal with the symbol grounding problem. More recently, such neural network approaches have been incorporated in studies based on cognitive agents and robots. In this paper we present a new model of symbol grounding transfer in cognitive robots. Language learning simulations demonstrate that robots are able to acqui...
Evolving Distributed Representations for Language with Self-Organizing Maps
We present a neural-competitive learning model of language evolution in which several symbol sequences compete to signify a given propositional meaning. Both symbol sequences and propositional meanings are represented by high-dimensional vectors of real numbers. A neural network learns to map between the distributed representations of the symbol sequences and the distributed representations of ...
Hill Climbing in Recurrent Neural Networks for Learning the abc Language
A simple recurrent neural network is trained on a one-step look-ahead prediction task for symbol sequences of the context-sensitive abc language. Using an evolutionary hill climbing strategy for incremental learning, the network learns to predict sequences of strings up to depth n = 12. The experiments and algorithms used are described. The activation of the hidden units of the trained network i...
Symbol Grounding Association in Multimodal Sequences with Missing Elements
In this paper, we extend a symbolic association framework to handle missing elements in multimodal sequences. The general scope of the work is the symbolic association of object-word mappings as it happens during language development in infants. This scenario has long been of interest to Artificial Intelligence, Psychology, and Neuroscience. In this work, we extend a recent approach fo...